# Load important libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Chapter 2 - Dimension Reduction
Unsupervised Learning
The next two chapters focus on unsupervised learning, a field of machine learning. Unsupervised learning applies when we do not have labels for our data, that is, when there is no target value. Because there are no labels to “supervise” the learning, we need to gather insight from the data itself, which is where unsupervised learning methods come in. Unsupervised learning is a broad topic; in this course we will cover just two main areas: dimension reduction and clustering.
There is a considerable amount of technical backend to the topics discussed in these chapters. We will provide the minimum theory / mathematics relevant but would suggest completing the ML Theory course for a more comprehensive exploration.
Dimension Reduction
Dimension reduction is an unsupervised learning technique; here the aim is not to predict a target variable, and our data is not labelled. Instead we aim to reduce the size of our data to a manageable scale for further analysis. The information within this chapter will be a useful supplement to the Data Preparation section of Introduction to Machine Learning.
As our data grows in number of features, we will often struggle to separate the signal from the noise. With more features available to us, it can be hard to decide which are important to our machine learning models and how we can present them; furthermore, our features can be highly correlated, and it is hard to detect potential multicollinearity. With a greater scale of data, we might start to ask questions that haven’t been tackled in the previous machine learning sections, such as:
- How can we make our models run faster and better?
- How can I illustrate this complex data?
- What are the most important parts of this data?
How to visualise and manipulate data of many dimensions is tackled using an area of Unsupervised Learning called Dimension Reduction. This topic focuses on reducing the number of features in our data by combining them in a way which maximises the amount of information contained. This recreates our data with new, fewer features, which we can then work with. The reduced features can then be incorporated into other machine learning algorithms.
0.2 Learning Outcomes
By the end of this course you should be able to:
- Understand what dimension reduction is.
- Understand the effect of dimension reduction techniques.
- Be aware of the benefits and difficulties of using these techniques.
- Be aware of how Principal Component Analysis works.
- Be able to implement dimension reduction in python for machine learning.
- Be aware of how t-SNE works and when to use it.
1 Lossy Compression
What is dimension reduction? In our machine learning context, it is the reduction of the number of features (columns) of our dataset, whilst minimising the loss of information.
This generally falls under the topic of “Lossy Compression”.
The Compression part shows that we are reducing the quantity of total data that we have in our data set.
The Lossy part denotes that we are going to inherently lose information by reducing the dimensions of our data. Our goal for good dimension reduction is to minimise the amount of useful information lost. This is different to lossless compression, such as zipping a file.
How we mathematically implement dimension reduction will be covered in later sections. This section will highlight the characteristics of general dimension reduction.
For a more in depth exploration and further mathematics - see the ML Theory course.
Our original data set will be denoted by \(X\) throughout; we presume that all of our data is numerical.
\(X\) has several qualities that are useful for our machine learning models. These include information such as:
- Distributions of data (ranges, shapes, variances, averages and more)
- Correlations/covariance between features
We can convert our different features into a new set of features, each of these new features is some combination of our original features. For now this will be called encoding our data.
Our goal is to transform our data from one set of features to another smaller number of features.
We will denote encoding our data using the function \(f(X)\) which will produce a new, smaller data set \(X'\).
\[f(X) \rightarrow X'\]
We now have a new, dimension reduced data set to work with.
However, just reducing the dimensions does not inherently mean that we have a good new data set. We could be removing the most important columns to make the data smaller!
A basic (and bad!) example of \(f(X)\) would be to randomly drop a subset of columns. Alternatively, we could first fit a Lasso regression and remove the columns whose coefficients are driven to zero by the regularisation.
To produce a good dimension reduction, we need to know how much information has been lost in the encoding. To do this we introduce a decoder function \(g(X')\) which will transform our reduced dimension data into the original number of dimensions. This is done because we cannot compare data of different dimensions directly.
Where \(f(X)\) reduces the number of dimensions, \(g(X')\) increases them back to the original size.
Consider it as a forward then backward transformation.
Or in a more “real-world” situation: heating a sauce (\(f(X)\) encoding) to reduce the amount of water in it, concentrating the flavour (information).
By this analogy we add water back to the sauce to bring it back to its original concentration (\(g(X')\) decoding).
We show the new data as \(X''\).
\[g(X') \rightarrow X''\]
\[\therefore~~ g(f(X)) \rightarrow X''\]
This decoded data is in the original number of dimensions, but crucially, it is not the same as the original data. By encoding and decoding our data using dimension reduction we have lost information.
Using our cooking analogy; the decoded sauce will be the same concentration as the original sauce, but it won’t taste the same as we have gone through an irreversible process (heating/encoding).
This analogy is of course a bit of a stretch, but should be clearer as we look into actual implementations.
Now that we have two data sets of the same dimension \(X\) and \(X''\) we can compare them. \(X\) will have more useful information than \(X''\).
Our goal in dimension reduction is to reduce the information difference between our original data and decoded data. As a pseudo equation this means we are trying to:
\[ \text{minimise}\left( ~ \text{info}(X) - \text{info}(X'') ~ \right) \]
Our task involves finding the best functions \(f(X)\) and \(g(X')\) to encode and decode our data into a smaller dimension and back.
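As a toy illustration of this encode/decode framework, we can sketch a deliberately crude pair of functions: an encoder that drops columns and a decoder that pads them back with zeros. The random data and the choice of columns here are arbitrary; the point is only to make the shapes and the information loss concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
X_toy = rng.normal(size=(100, 5))

def f(X, keep=3):
    """A deliberately bad encoder: keep only the first `keep` columns."""
    return X[:, :keep]

def g(X_prime, p=5):
    """The matching decoder: pad the dropped columns back with zeros."""
    n, k = X_prime.shape
    X_decoded = np.zeros((n, p))
    X_decoded[:, :k] = X_prime
    return X_decoded

X_toy_prime = f(X_toy)          # X': reduced data, fewer columns
X_toy_decoded = g(X_toy_prime)  # X'': back in the original dimensions

# The reconstruction error stands in for info(X) - info(X'')
error = np.mean((X_toy - X_toy_decoded) ** 2)
print(X_toy_prime.shape, X_toy_decoded.shape, error > 0)
```

A good \(f\) and \(g\) pair would make this error as small as possible; dropping columns at random clearly does not.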
Using the code given below, find a value of alpha which results in only two features of the model having non-zero coefficients. Use this result to find the two most important features in the small data set using lasso regression.
Data Preparation
from sklearn.preprocessing import RobustScaler
# Load data
bikes_filepath = '../../data/bikes.csv'
bikes_data = pd.read_csv(filepath_or_buffer=bikes_filepath, delimiter=",")
# select only numeric data
bikes_data_numeric = bikes_data.select_dtypes(include="number")
# remove missing data
bikes_data_numeric = bikes_data_numeric.dropna()
# Separate X and y
y = bikes_data_numeric["count"]
X_unscaled = bikes_data_numeric.drop(columns=["count"])
# basic feature scaling
X = RobustScaler().fit_transform(X_unscaled)
Model tuning: find an appropriate value of alpha below.
from sklearn.linear_model import Lasso
# Select your value of alpha, the initial one gives a bad result
alpha = 1
# Train a model on all the data
lasso_model = Lasso(alpha=alpha).fit(X, y)
# show resulting coefficients, we want two to be non-zero
lasso_model.coef_
2 Matrix Decomposition
2.0.1 An Introduction
Some methods of dimension reduction rely on a process called Singular Value Decomposition, this is a method of linear algebra which allows us to change the dimensions of matrices. We are going to cover this topic at a high level so the intuition of the method is understood. An in-depth understanding of linear algebra is not needed for this course.
Before we look into decomposing matrices we are first going to look at how we can represent our data in a more mathematical format.
2.1 Representing Data
So far we have looked at our data in tables using pandas and then briefly in numpy arrays. These are data structures that contain our elements of data, what we need is a way to represent our data in a way we can mathematically work with.
We can represent a column in our data as a vector \(x\), with \(n\) rows.
The elements of the vector correspond to each of our data points in the column. \[ x = \left( \begin{array}{c} x_1\\ x_2\\ ...\\ x_n\\ \end{array} \right) \]
Vectors allow us to represent directions in space.
We can convert our vector from vertical to horizontal (column to a row) using a matrix transposition which is denoted by a \(^T\) after our vector.
The transpose of \(x\) is therefore: \[ x^T = \left( \begin{array}{c} x_1\\ x_2\\ ...\\ x_n\\ \end{array} \right)^T = \left( \begin{array}{r} x_1~~x_2~~...~~x_n\\ \end{array} \right) \]
Our data of many columns can be represented as multiple vectors put together into a matrix. If we have \(p\) vectors with \(n\) elements we can concatenate them to a matrix \(X\) with \(n\) rows and \(p\) columns.
Each element in the matrix is given by the indexes \(i\) and \(j\), the row and column number respectively. The element in the first row, second column is therefore \(x_{1,2}\).
Our matrix then takes the form:
\[ X = \left( \begin{array}{ccc} x_{1,1}~~x_{1,2}~~...~~x_{1,p}\\ x_{2,1}~~x_{2,2}~~...~~x_{2,p}\\ x_{3,1}~~x_{3,2}~~...~~x_{3,p}\\ ...~~...~~...~~...\\ x_{n,1}~~x_{n,2}~~...~~x_{n,p}\\ \end{array} \right) \]
We can now apply functions to our matrix to manipulate our data.
Having multiple vectors allows us to represent not just a direction in space, but with two vectors a plane, and more than two a volume.
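This notation maps directly onto numpy arrays, which we have already been using. A short sketch (the numbers are arbitrary):

```python
import numpy as np

# A column of data as a vector with n = 4 rows
x1 = np.array([[1.0], [2.0], [3.0], [4.0]])

# Transposing turns the column vector (4, 1) into a row vector (1, 4)
print(x1.T.shape)

# Stacking p = 2 column vectors side by side gives an n-by-p matrix
x2 = np.array([[5.0], [6.0], [7.0], [8.0]])
X_small = np.hstack([x1, x2])
print(X_small.shape)

# The element x_{1,2} (first row, second column) uses
# zero-based indexing in numpy
print(X_small[0, 1])  # 5.0
```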
2.2 Singular Value Decomposition
We can think of performing SVD on our data as untangling the relationships between our features, so that our new columns are not correlated with each other. We transform the data into a new space based on principal components (eigenvectors), maximising the variance along dimensions that are uncorrelated with each other.
For Singular Value Decomposition (SVD) to work we are going to assume our data is centered. This means that the mean of each column’s values is zero for all columns. This is achieved by subtracting the columns mean from all values in the column, in a similar manner to how we scaled data in Chapter 1: Data Preparation.
Here is an example of performing SVD on a data set with two features. We convert the two features \(x_1\) and \(x_2\) into two new dimensions (component 1 & 2) by rotating and scaling our data. The components found by the decomposition are shown on both plots. If we had data of more than two features, we could reduce those dimensions down to two in order to plot them.
The most important dimension is given by the longest vector.
Figure A.
How this graph was created is shown in the extras folder under “SVD Diagram Production.ipynb”.
We have discussed previously that our data contains relationships between the variables (correlations, covariances).
As a result, we can express the data matrix values in a different format using the relationships between columns.
The new format is achieved by rotating and scaling the data into its component parts - eigenvectors.
This is called a matrix decomposition; we are decomposing the original matrix into a different representation: three new matrices.
When multiplied together the matrices give the original data \(X\).
\[X = USV^T\]
We now have three new matrices \(U\), \(S\) and \(V^T\), note that \(V^T\) is V transposed.
Our original data \(X\) was organised by columns (vectors), our new matrices represent a transformation into new dimensions.
The new representation of our data lies along the component axes that give us the largest possible variance.
In the below diagrams we need to keep in mind that both sides of the equals sign are equivalent. On the left we have our initial data. On the right we have a decomposition of the original data, that has rotations and scaling.
- U rotates our data into a new set of dimensions
- S scales our data to its new values in the new dimensions (a diagonal matrix)
- \(V^T\) rotates the data back to the original dimensions
We can later inspect the matrices \(U\), \(S\) and \(V^T\) to understand our data and the importance of different features.
It’s important to note that these new dimensions created will not correspond directly to our original columns. Some of our new dimensions are more important to represent the variance than others.
Each principal component axis has a special property relating them to each other: they are orthogonal. This means that they are all at right angles to each other and therefore do not interact.
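These properties can be checked directly with numpy’s built-in SVD. This is a sketch on random data, not the course data set:

```python
import numpy as np

rng = np.random.default_rng(42)
X_demo = rng.normal(size=(50, 3))

# SVD presumes centred data: subtract each column's mean
X_demo_centred = X_demo - X_demo.mean(axis=0)

# full_matrices=False returns the compact decomposition
U, s, Vt = np.linalg.svd(X_demo_centred, full_matrices=False)

# Multiplying the three matrices back together recovers the data: X = USV^T
X_rebuilt = U @ np.diag(s) @ Vt
print(np.allclose(X_rebuilt, X_demo_centred))  # True

# The rows of V^T (the component axes) are orthogonal unit vectors
print(np.allclose(Vt @ Vt.T, np.eye(3)))  # True
```

Note that numpy returns \(S\) as a vector of singular values `s`; `np.diag(s)` rebuilds the diagonal matrix.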
This is a difficult topic that takes time, and application to understand.
We will now look at why decomposing our data into three matrices is actually useful.
3 Principal Component Analysis
Principal Component Analysis (PCA) is a method used to reduce the number of dimensions in a matrix, this relies on SVD.
The “Principal Components” are the new columns (dimensions) after rotating our data.
3.1 Reducing to \(k\) dimensions
When we perform SVD, rotating our data in line with the dimensions that maximise the variance, the columns of \(U\) come ordered by how much of the original data’s variance they explain. The scale of this is given by the elements of \(S\).
The “principal components” are the new dimensions/features after rotating our data. The rotated dataset in the direction of the principal components is given by \(US\).
Our goal is to reduce the dimensions of our dataset. We only want the \(k\) most important dimensions and can discard the rest.
Because our data is ordered by importance (the variance in each dimension), we keep only the first \(k\) columns of \(U\) and first \(k\) rows and columns of \(S\).
This new matrix \(X' = U'S'\) is the reduced-dimension data set that we can now use in our machine learning models.
The decomposition and subsequent removal of less important principal components can be considered equivalent to our function:
\[f(X) \rightarrow X'\]
To transform our data back to the original dimensions we multiply \(U'S'\) by \(V^{T\prime}\), where \(V^{T\prime}\) is the first \(k\) rows of \(V^T\).
\[ X'' = U'S' V^{T}{'} \]
This is equivalent to the function \(g(X')\) discussed in the previous section.
\[g(X') \rightarrow X''\]
The difference between \(X''\) and \(X\) gives us the quality of the dimension reduction process.
Each principal component will explain some amount of the variance of the original data. The first component will explain the largest amount, then decreasing from there.
If we take \(k = p\) (new columns = old columns), the components of our decomposition will (in theory) explain all the variance of the original data.
When we take \(k~<~p\) then the explained variance of our new reduced data will be less than that of the full decomposition.
By removing principal components, we remove some explained variance. However, by removing the components in order of least to most important we can still explain much of the variance.
The later components hopefully represent less important feature combinations, which could be noise terms. By removing these terms we may in fact improve the resulting model performance.
We will use a ratio of how much variance is explained by our new data when compared to using all principal components.
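The truncation step described above can be sketched with numpy directly. The data here is synthetic, constructed so the earlier features dominate the variance; the explained variance ratio comes from the singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data where the earlier features dominate the variance
X_svd = rng.normal(size=(200, 4)) * np.array([10.0, 3.0, 1.0, 0.1])
X_svd = X_svd - X_svd.mean(axis=0)

U, s, Vt = np.linalg.svd(X_svd, full_matrices=False)

k = 2
# Keep the first k columns of U, scaled by the first k singular values: X' = U'S'
X_svd_reduced = U[:, :k] * s[:k]
# Decode back to the original dimensions: X'' = U'S'V^T'
X_svd_decoded = X_svd_reduced @ Vt[:k, :]

# Each component's share of the total variance comes from the singular values
explained_variance_ratio = s**2 / np.sum(s**2)
print(X_svd_reduced.shape)
print(explained_variance_ratio[:k].sum().round(3))  # close to 1 for this data
```

Because the singular values are returned in decreasing order, keeping the first \(k\) automatically keeps the most important components.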
3.2 Implementing PCA
Thankfully, we do not need to implement this matrix decomposition ourselves. As with many machine learning topics, there is a class within sklearn that performs the necessary steps to decompose our data and remove the specified number of components.
To perform PCA our data needs to be centred, which sklearn handles itself.
The most important hyperparameter for using PCA is the value of \(k\), how many components we want.
As with all hyperparameters in sklearn you can optimise a model with hyperparameter tuning.
Instead of using the notation of \(k\), sklearn has the parameter n_components which describes the same thing; how many components we want our final data to have.
3.3 Data Example
We are going to use a new data set called high_dimension. This was a data set created to practice classification problems with.
from sklearn.decomposition import PCA
high_dim = np.loadtxt("../../data/high_dimensions.csv", delimiter=",")
The final column of our data set is the target column y.
We will be using data throughout this course that has been stored with features X and targets y in the same file. Given \(n\) columns, we will take the \(n\)th column as y and the first \(n-1\) columns as X. We will then reduce the dimensionality of X and explore any relationships with y.
high_dim.shape
high_dim
# Splitting the data set
X = high_dim[:,0:-1]
y = high_dim[:,-1]
unique_y, y_counts = np.unique(y, return_counts=True)
print("X shape:", X.shape)
print("y shape:", y.shape)
print("Unique values of y: ", unique_y)
print("Counts of y: ", y_counts)
The target values in y are \(0\) and \(1\), with counts of \(400\) and \(600\) respectively.
We now have a data set X of 1000 rows and 100 features.
- \(n = 1000\)
- \(p = 100\)
This is clearly too many for us to plot or analyse one by one. We are going to reduce the number of dimensions to \(k=5\).
# We are choosing k here arbitrarily
k = 5
# Create PCA object
pca = PCA(n_components=k, random_state=123)
# Fit the PCA object to the data (performs SVD and reduce dimensions)
pca.fit(X);
# Transform the data into reduced dimensions (remove components)
X_reduced = pca.transform(X)
X_reduced.shape
Our new data X_reduced has \(n\) rows and \(k\) dimensions.
From the pca object we can get characteristic properties of the components.
First we get the principal components themselves, given by pca.components_; these are vectors in our original feature space. The vectors represent directions of maximum variance. The magnitude of each element in a component tells us how influential that feature is within that component.
Each of our 5 components will contain 100 values. Each value is the weight of the corresponding original feature column in that principal component.
# Shape of resulting matrix
pca.components_.shape
# Shape of a single component
pca.components_[0].shape
# The first component of the PCA in the original data space
pca.components_[0]
Next we have the explained variance of each component, given by pca.explained_variance_.
pca.explained_variance_
We can see here that the first component explains more variance than the second, and so on.
However, it is more useful for us to look at the explained variance ratio of each component.
This will help us judge how useful each component is at explaining the variance of the data.
evr = pca.explained_variance_ratio_
# Produce a percentage
evr_pct = evr*100
print("The explained variance of the first component is: {} %".format(evr_pct[0].round(2)))
print("The explained variance of the second component is: {} %".format(evr_pct[1].round(2)))
print("The total explained variance of all {} components is: {} %".format(k, evr_pct.sum().round(2)))
We can see here that our first component explains some of the variance, and all \(k=5\) components together explain more, but some information about the original data is clearly lost!
Perform PCA again on \(X\), this time with \(n\_components=10\), what is the difference in total explained variance between using five and ten components? Which number of components might we prefer to use?
# Write your code here
4 Effects of Dimension Reduction on Models
We are now able to perform PCA on our data set to make new data using principal components. We will now explore what impact this has in a supervised machine learning context.
4.0.1 Visualising data
We are first going to look at the new data in two dimensions. Using our target attribute, we can visualise how our classification model might learn relationships in the new data. Visualising all the data at once is something we would not be able to do with our original data.
It’s important to note that we have not used y yet to train anything, the visualisation shows the natural spread of the classes using only X.
k = 2
# Create the new PCA object with k=2
pca = PCA(n_components=k)
# Transform the data into two dimensions
X_pca = pca.fit_transform(X)
print("Total variance ratio explained by {} components: {}%".format(k, (pca.explained_variance_ratio_.sum()*100).round(3)))
first_dimension = X_pca[:,0]
second_dimension = X_pca[:,1]
# Plotting data in new dimensions
# Adding the colour as the class labels lets us see
# how the components correspond to target class values
plt.scatter(x=first_dimension, y=second_dimension, c=y, alpha=0.2, cmap="cool")
plt.axis("equal");
We can see that, based on the target classes, the two-dimensional projection of our data could help predict values, as there seems to be some separation occurring.
4.0.2 Comparing Performance in Prediction
We are going to use a simple, untuned Logistic Regression model to compare how well our reduced dimension data performs compared to the original set.
This is done using the F1 score as our metric with training and test splits.
First, the data needs to be centred for a fair comparison, since PCA centres the data for us.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
# Find the mean of each column used for centring
column_means = np.mean(X, axis=0)
# Center the data
# This helps the PCA perform better
X_centred = X - column_means
# Split the data
X_train, X_test, y_train, y_test = train_test_split(X_centred,
y,
test_size=0.33,
random_state=42)
# Train the model
original_model = LogisticRegression().fit(X_train, y_train)
# Generate predictions
y_pred = original_model.predict(X_test)
# Measure performance
original_score = f1_score(y_test, y_pred)
print("Original data score:", original_score)
Let us make a function that does the PCA, model training and prediction steps for us, given the input data and the number of desired dimensions.
def PCA_train_predict_score(X, y, k, random_state=42):
"""
Function to perform PCA, generate Logistic Regression model
evaluating performance of given k components.
Parameters
----------
X : np.array (n,p)
Feature data
y : np.array (n,)
Target data
k : int
Number of components for PCA to return
random_state : int
Random seed for train_test_split
Returns
-------
score : float
f1_score result of model trained.
"""
# Split the data
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.33,
random_state=random_state)
# Fit the PCA model on the training data
pca = PCA(n_components=k).fit(X_train)
    # Apply the PCA transformation learned from the training data
    # to both the training and test sets.
    # This puts the test data into the same component space as
    # the training data
X_train = pca.transform(X_train)
X_test = pca.transform(X_test)
# Train the model
model = LogisticRegression().fit(X_train, y_train)
# Generate predictions and score the predictions
# Using f1 score described in Chapter 3
y_pred = model.predict(X_test)
score = f1_score(y_test, y_pred)
return score
Reminder - the f1 score is calculated to measure the performance of prediction using:
\[ F1 = 2 \cdot \frac{precision \cdot recall}{precision + recall} \]
or
\[ F1 = \frac{TP}{TP + \frac{1}{2} (FP + FN)} \]
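To check that the two formulas agree, here is a small worked example on toy predictions, compared against sklearn’s f1_score:

```python
from sklearn.metrics import f1_score

# Toy predictions with TP = 2, FP = 1, FN = 1, TN = 1
y_true = [1, 1, 1, 0, 0]
y_pred = [1, 1, 0, 1, 0]

tp, fp, fn = 2, 1, 1
precision = tp / (tp + fp)
recall = tp / (tp + fn)

# Formula 1: from precision and recall
f1_from_pr = 2 * precision * recall / (precision + recall)
# Formula 2: directly from the confusion-matrix counts
f1_from_counts = tp / (tp + 0.5 * (fp + fn))

# Both formulas give the same value, matching sklearn
print(round(f1_from_pr, 4), round(f1_from_counts, 4),
      round(f1_score(y_true, y_pred), 4))
```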
# The maximum number of components is given by the original feature number
k_max = X.shape[1]
# Create a list of k_values
k_values = list(range(1, k_max + 1))
# Generate F1 scores for each k value of PCA
f1_scores = [PCA_train_predict_score(X, y, k) for k in k_values]
# Plot and compare the resulting performance of different values of k
plt.title("F1 score over all values of k")
plt.xlabel("$k$")
plt.ylabel("F1 score")
plt.ylim(0.75, 0.88)
plt.hlines(y=original_score, xmin=-1, xmax=k_max, label="No-PCA", colors="grey", linestyles="dotted")
plt.scatter(x=k_values, y=f1_scores, c=f1_scores)
plt.legend();
We can see from the above data that choosing to reduce our dimensionality can improve our model’s predictive performance in some cases. This is because it can remove unimportant information in the original data set: statistical noise.
There are three trends here to note:
- Starting from \(k = 1\), the f1 performance increases with increasing \(k\). Adding more significant dimensions gives our model more information to train on, producing a better score.
- There is a peak somewhere between \(k = 5\) and \(k = 20\). Above this our performance gradually declines. This is because our model is overfit to the training data: it is creating principal components using noise rather than real trends, which propagates through to the model and reduces performance.
- At higher values of \(k\), adding more components decreases the model’s performance more significantly; as \(k \rightarrow 100\) the performance of our model approaches that of the model trained without PCA.
From this we can see that selecting the right value of \(k\) can significantly (~5%) improve our model’s predictive performance.
To determine an optimal value of \(k\) we could perform cross validation over a range of \(k\) values, compare the training and test f1 scores, and choose the value of \(k\) with the best test f1 score. With the best \(k\) value found, we would then continue our modelling process.
The method shown in the figure above is a more simplistic version of the cross validation approach. The cross validation approach will provide a more statistically robust result. Instead of using train_test_split you could instead use sklearn.model_selection.cross_validate or related methods.
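A minimal sketch of that cross-validation approach, using a Pipeline so the PCA is refit on each training fold. The data from make_classification is a synthetic stand-in so the sketch runs on its own; the candidate k values are arbitrary:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic stand-in for the course data set
X_demo, y_demo = make_classification(n_samples=300, n_features=30,
                                     n_informative=8, random_state=0)

mean_scores = {}
for k in [2, 5, 10, 20]:
    # With PCA inside the pipeline it is refit on each training fold,
    # so no information leaks in from the validation fold
    pipe = Pipeline([("pca", PCA(n_components=k)),
                     ("model", LogisticRegression())])
    scores = cross_val_score(pipe, X_demo, y_demo, cv=5, scoring="f1")
    mean_scores[k] = scores.mean()

best_k = max(mean_scores, key=mean_scores.get)
print("Best k:", best_k)
```

Putting the PCA inside the pipeline (rather than transforming once up front) is what makes the fold scores statistically honest.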
Side note: the “true” number of components used to create this data set was \(k=10\).
What value of \(k\) has the highest associated F1 score? What does that tell you about our data?
HINT: look up the numpy function np.argmax()
# Write your code here
4.0.3 Comparing Training Time
As the size of our data gets large, the time taken to train a model increases drastically. One consideration in choosing how many dimensions of data to keep is how long it will take to train our model. We often face trade-offs between accuracy, time and memory constraints in computing.
We are going to modify our PCA_train_predict_score() function to measure how long it takes to train a model, in addition to the F1 score. We can then compare each model. In this example we are just going to look at how long the model training takes, not how long the whole process takes.
We are using a relatively small data set, so the scale of times may be small.
import time
def PCA_train_predict_score_time(X, y, k, random_state=42):
"""
Function to perform PCA and generate model to check model performance
and training time of given k components.
Parameters
----------
X : np.array (n,p)
Feature data
y : np.array (n,)
Target data
k : int
Number of components
random_state : int
Random seed for train_test_split
Returns
-------
score : float
f1_score result of model trained.
training_time : float
The time taken to train the model
"""
# Split the data
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.33,
random_state=random_state)
# Fit the PCA to the training data
pca = PCA(n_components=k).fit(X_train)
    # Transform the test data using the PCA fitted on the training data
X_train = pca.transform(X_train)
X_test = pca.transform(X_test)
# Create the model object
estimator = LogisticRegression()
# Get the time at the start and end of the model training
start_time = time.time()
model = estimator.fit(X_train, y_train)
finish_time = time.time()
# Generate predictions and score
y_pred = model.predict(X_test)
score = f1_score(y_test, y_pred)
# Calculate total training time
training_time = finish_time - start_time
    return score, training_time
Generate the time and performance data for each value of \(k\) we are interested in.
# The maximum number of components is given by the original feature number
k_max = X.shape[1]
# Create a list of k_values
k_values = list(range(1, k_max + 1))
# Generate F1 scores and model training time for each k value of PCA
f1_scores_times = [PCA_train_predict_score_time(X, y, k) for k in k_values]
# Separate the list of tuples into two lists
f1_scores, times = zip(*f1_scores_times)
Getting our non-PCA performance to compare with:
# Split the data
X_train, X_test, y_train, y_test = train_test_split(X_centred,
y,
test_size=0.33,
random_state=42)
# Create the model object
original_estimator = LogisticRegression()
# Get the time at the start and end of the model training
start_time = time.time()
original_model = original_estimator.fit(X_train, y_train)
finish_time = time.time()
# Generate predictions and evaluate the performance
y_pred = original_model.predict(X_test)
original_score = f1_score(y_test, y_pred)
# Calculate the time taken to train
original_time = finish_time - start_time
Plotting the measured training times shows that the fewer dimensions used to train on, the faster the model trains.
However, we can see that the fastest models to train are not the best performing models.
There is a trade-off between number of dimensions reduced to, the speed of training and the model’s final performance.
plt.scatter(k_values, times, c=f1_scores)
cbar = plt.colorbar()
cbar.set_label("F1 Score")
plt.title("Model training time over all values of k")
plt.xlabel("$k$")
plt.ylabel("Training time (s)")
plt.ylim(0, 0.08)
plt.hlines(y=original_time, xmin=-1, xmax=101, label="Without PCA", colors="grey", linestyles="dotted")
plt.legend(loc="upper left");
We can see that the lowest values of \(k\) give the fastest training but low performance. Higher values of \(k\) improve the model performance and are still faster than the original model; however, increasing \(k\) too much decreases both performance and speed.
Using \(X\) fit a PCA model with \(k = 50\).
Plot the cumulative sum of the explained variance ratio of each component.
HINT: use the numpy np.cumsum() function
# Write your code here
5 PCA Variants
5.0.1 Kernel PCA
The PCA method discussed so far creates a set of dimensions that are orthogonal to each other (at right angles). This is a good assumption for some data; however this linear model will not capture important aspects of all data.
The relationships between some features may not be linear; by using Kernel PCA we can capture this non-linear behaviour.
One common example of non-linear data properties that linear PCA cannot adequately express is circularly separated data.
An example of this is given below.
# Load in our concentric circle data
circle_data = np.loadtxt("../../data/circles.csv", delimiter=",")
# Split X and y data
X_circles, y_circles = circle_data[:,0:-1], circle_data[:,-1]
In this data set we have three columns; the first two are the continuous data features. The third is a binary target attribute.
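If circles.csv is not to hand, a similar concentric-circle data set can be generated with sklearn’s make_circles; the noise level and radius factor below are arbitrary choices, not the settings used to create the course file:

```python
import numpy as np
from sklearn.datasets import make_circles

# A stand-in for circles.csv: two concentric rings with binary labels
X_circles_demo, y_circles_demo = make_circles(n_samples=500, factor=0.3,
                                              noise=0.05, random_state=0)

# Class 1 is the inner ring, so its points sit closer to the origin
radii = np.linalg.norm(X_circles_demo, axis=1)
print(radii[y_circles_demo == 1].mean() < radii[y_circles_demo == 0].mean())
```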
plt.figure(figsize=(6, 6))
plt.title("Original Dimensions of Features")
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.axis("equal")
plt.scatter(x=X_circles[:,0], y=X_circles[:,1], c=y_circles, cmap="coolwarm");
Using X_circles, perform linear PCA with \(k = 2\). Plot the resulting new dimensions of data. Calculate the total explained variance ratio of the model and discuss why it may be difficult for a linear machine learning model to learn the pattern of this data.
# Write your code here
We can see that this will reduce the dimensions of our data, but this new representation is similar to the original. Using a linear PCA doesn’t separate our two classes in an intuitive way.
Using Kernel PCA we can use the non-linear structure of the data to find the best components.
There are several options for the choice of kernel; these include:
- “linear” - fits a linear kernel
- “poly” - fits a polynomial kernel (a function of order > 1)
- “rbf” - fits a radial basis function kernel
- and more!
For more information on kernels and how sklearn defines them check here.
Our data is in concentric circles, so we are going to choose the radial basis function as it can represent circles.
The parameter gamma is the kernel coefficient, which determines the strength of the kernel function’s effect.
from sklearn.decomposition import KernelPCA
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=2)
X_kpca = kpca.fit_transform(X_circles)
plt.figure(figsize=(6, 6))
plt.title("New dimensions using rbf kernel")
plt.xlabel("$component~1$")
plt.ylabel("$component~2$")
plt.axis("equal")
plt.scatter(x=X_kpca[:,0], y=X_kpca[:,1], c=y_circles, cmap="coolwarm");
This makes the difference between the two concentric circles in the original data much clearer: in the new dimensions the classes are better separated.
A machine learning model would find it easier to learn the difference between the classes here than when the data was arranged in circles.
A really simple model, such as defining the predicted class purely based on the value of component 1 would perform very well!
Perform Kernel PCA on the \(X\_circles\) data set using the “rbf” kernel and \(k = 2\).
Use values of \(gamma=1\) and \(gamma=7\). What effect does this have?
Plot the results of each value.
# Write your code here
5.0.2 Incremental PCA
PCA is a very powerful tool in reducing the size of our data. However, traditional methods for calculating the SVD of a data set require all the data to be held within the computer memory at once.
This can be impractical for data sets which are larger than our computer memory.
In this case we can use Incremental PCA, which partially fits the PCA over batches of data samples. This means only a single batch of data needs to be held in memory at any one time, reducing the memory required.
The sklearn implementation of this method sets batch_size = 5 * n_features by default.
This method will produce different, but similar, principal components to linear PCA.
This example uses data that would fit in the computer memory as a toy example.
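The batch-by-batch fitting can be sketched explicitly with \(partial\_fit\) (synthetic data here; the batch size of 50 is an arbitrary choice for illustration):

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
X_big = rng.normal(size=(1000, 20))  # stand-in for data too big for memory

ipca = IncrementalPCA(n_components=5)
# Feed the data in batches so only one batch is in "memory" at a time
for start in range(0, X_big.shape[0], 50):
    ipca.partial_fit(X_big[start:start + 50])

X_small = ipca.transform(X_big)
print(X_small.shape)  # (1000, 5)
```

Calling \(fit\) directly instead does the same batching internally using batch_size.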
Using \(ipca\_20\), fit the model to \(X\). Compare the total explained variance ratio of \(ipca\_20\) with the original PCA method using \(k=20\).
from sklearn.decomposition import IncrementalPCA
ipca_20 = IncrementalPCA(n_components=20)
# Write your code here
6 Factor Analysis
PCA and its variants are just one family of dimension reduction methods.
When performing dimension reduction we map our data from its original space to a new space whose dimensions represent combinations of features. These combinations of features have meaning as latent variables. There are different approaches to the mapping / combining of features, and by choosing different methods we can interpret our data in different ways.
Another approach, similar to PCA but with key differences, is Factor Analysis. Instead of principal components, the resulting space we analyse is made up of factors.
The classical PCA we looked at first has the following characteristics:
- Data is continuous
- Components are orthogonal / not correlated
Approaches in Factor Analysis allow us to use data types other than continuous variables, and let us analyse latent variables in different ways.
Each feature is modelled as a weighted combination of factors, and we can specify how many factors we want to find in our data.
Consider a data set \(X\) with \(p\) features/variables \(X_1, X_2, ..., X_p\).
After performing Factor Analysis we find \(m\) factors, where \(m < p\) (remember we are trying to reduce our dimensions!) \(F_1, F_2, ..., F_m\).
Each feature \(X_i\) is: \[X_i = a_{i1}F_1 + a_{i2}F_2 + ... +a_{im}F_m + e_i \]
Where the \(a_{ij}\) values are the factor loadings, describing how important each factor is in explaining the feature. The value \(e_i\) is the error that cannot be explained by the factors.
Some factors will explain some features well, and have high factor scores. Some will have low factor scores, meaning the factor does not explain the feature well. The goal of factor analysis is to separate out features such that combinations of features are explained by separate factors.
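The generative model above can be sketched in numpy: features built as weighted combinations of latent factors plus per-feature noise (the dimensions and random weights here are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 500, 6, 2               # samples, features, factors (m < p)

F = rng.normal(size=(n, m))       # latent factors F_1, F_2
A = rng.normal(size=(m, p))       # factor loadings a_ij
E = 0.1 * rng.normal(size=(n, p)) # per-feature noise e_i

# Each feature: X_i = a_i1 * F_1 + ... + a_im * F_m + e_i
X_sim = F @ A + E
print(X_sim.shape)  # (500, 6)
```

Factor Analysis works in the opposite direction: given only \(X\), it estimates the loadings and factors.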
Factor Analysis can be performed in a similar way to PCA using sklearn.decomposition.FactorAnalysis. This method uses maximum likelihood estimation to build the factors, so its fit can be evaluated using a log-likelihood.
from sklearn.decomposition import FactorAnalysis
high_dim = np.loadtxt("../../data/high_dimensions.csv", delimiter=",")
X, y = high_dim[:,0:-1], high_dim[:,-1]
# Create 10 factors to analyse
fa = FactorAnalysis(n_components=10).fit(X)
factors = fa.transform(X)
# look at first factor
factors[:,0]
fa.score(X)
6.1 Further Factor Analysis
This basic method of Factor Analysis can be very powerful, and is often best used for exploratory data analysis. Factor Analysis assumes that the noise in each feature is Gaussian, but the noise distribution may differ from feature to feature.
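That per-feature noise assumption can be inspected through the fitted model's noise_variance_ attribute; a minimal sketch on simulated data (the factor structure and noise scales are arbitrary choices):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Simulate 5 features driven by 2 latent factors, with different noise per feature
F = rng.normal(size=(300, 2))
noise = rng.normal(size=(300, 5)) * [0.1, 0.2, 0.3, 0.4, 0.5]
X_sim = F @ rng.normal(size=(2, 5)) + noise

fa = FactorAnalysis(n_components=2).fit(X_sim)
# One estimated noise variance per feature - they need not be equal
print(fa.noise_variance_.shape)  # (5,)
```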
The sklearn implementation above works on continuous data. However, we will often want to work with categorical data (ordinal or nominal) instead. There are other methods of FA that can be used for these different data types.
These are beyond the scope of this course, but are listed here for reference.
- Using a contingency table (cross-tab or pivot): Correspondence Analysis (CA)
- More than 2 features that are all categorical: Multiple Correspondence Analysis (MCA)
- Groups of categorical or numerical features, not both: Multiple Factor Analysis (MFA)
- Both categorical and numerical features: Factor Analysis of Mixed Data (FAMD)
In Python, these methods can be used with a similar syntax to sklearn via the prince package, available on pip. For more information, see the prince documentation on GitHub.
7 Dimension Reduction for Visualisation
PCA is a common and powerful method for dimension reduction, however, there are limitations associated with it.
We have seen that reducing the data can lead to improvements in model performance as well as reductions in training time. We can also use dimension reduction as a method for visualising high dimension data.
Data with large numbers of dimensions (columns) is hard to comprehend: we cannot see all of the data at once. The natural structure between features escapes us visually once our data has more than three dimensions.
By reducing the number of dimensions in our data set we can see some of the relationships between each sample.
We can achieve this with PCA, but we will look at another method for doing so. Each method has benefits and challenges associated.
7.1 t-SNE
One other method to reduce the data dimensionality for visualisation is t-distributed Stochastic Neighbour Embedding (t-SNE).
Let’s break down the terms in that name:
- Stochastic - this method uses random processes
- Neighbour - the method uses the affinity between data near each sample
- Embedding - the method creates new vector spaces
- t-distributed - the method uses a Student's t-distribution to model the affinities between points in the new, low-dimensional space
In short, the t-SNE method works out the local structures in the data by mapping similar data near each other in the new dimension space. The new dimension space typically has two dimensions.
This method uses a Nearest-Neighbour search, in a similar way to the KNN algorithm; it is therefore important that we scale the data to a common range before passing it to the t-SNE dimension reducer.
(In our case \(X\) is already adequately scaled)
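For data that is not already scaled, a scaling step before t-SNE might look like the following sketch (StandardScaler is one of several reasonable choices; the toy data here is an illustration):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Two features on wildly different scales
X_raw = np.column_stack([rng.normal(0, 1, 200), rng.normal(0, 1000, 200)])

# Rescale each feature to mean 0, variance 1 before embedding
X_scaled = StandardScaler().fit_transform(X_raw)
X_2d = TSNE(n_components=2, random_state=123).fit_transform(X_scaled)
print(X_2d.shape)  # (200, 2)
```

Without the scaling step, the second feature would dominate the neighbour distances and the first would be effectively ignored.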
Warning: t-SNE can be very slow
Below we are reducing the high dimension data set \(X\) used for linear-PCA.
from sklearn.manifold import TSNE
# The random state initialisation can significantly impact our output
tsne = TSNE(n_components=2, random_state=123)
# Fit and transform the data into the t-SNE embedding dimensions
X_tsne = tsne.fit_transform(X)
plt.figure(figsize=(6, 6))
plt.title("New dimensions using t-SNE")
plt.xlabel("$component~1$")
plt.ylabel("$component~2$")
plt.axis("equal")
plt.scatter(x=X_tsne[:,0], y=X_tsne[:,1], c=y, alpha=0.4, cmap="coolwarm");
We can now look at our data, which had 100 dimensions, and understand something about its structure.
Using “../../data/bikes.csv”, keep only the boolean and numeric columns, clean the data, and visualise it in 2D using t-SNE, colouring by the target “count” attribute.
Compare t-SNE and PCA in 2D for this data.
What value of \(k\) is needed to have a total explained variance ratio \(> 0.8\) for linear PCA?
# Write your code here
A newer dimension reduction method using manifold learning is called umap; its usage is well beyond the scope of this course, but it is worth considering instead of t-SNE due to improved performance and speed. Its Python implementation follows the same structure as sklearn, but comes from a separate package called UMAP.
8 Summary
This chapter has been an introduction into dimension reduction techniques. There are many decisions to make regarding how/whether you reduce the dimensionality of your data. These fall under the general categories of method selection and desired dimension number.
We have covered the theory of lossy compression, which showed how we reduce information using encoding and decoding and the effect this has. We then looked at the linear algebra method of SVD, which allowed us to break down our data into informative component matrices. Using this SVD technique we could then analyse the principal components of the data, and subsequently reduce its dimensionality.
We then looked at the effects of reducing our data dimensions, which were changes to model prediction performance and training time.
Next, there was a discussion of some of the issues with PCA, including its linear assumptions and memory requirements, which naturally led to alternative implementations:
- Kernel PCA for non-linear dimension reduction
- Incremental PCA for data sets greater than the memory of our machine
We also looked at and discussed when to use different Factor Analysis methods.
The next section focused on a different method of dimension reduction to achieve useful data visualisation; t-SNE.
Dimension reduction can be used in two ways: to gather greater insight into our data through exploratory data analysis and visualisation, and to improve our machine learning models' performance.
We have been exploring elements of our training data in this chapter; next we will look into understanding the structure of data sets in greater depth. We will look at clustering as a way to group our data, using a variety of methods that give insight into our data.